    Dynamic importance sampling in Bayesian networks based on probability trees

    In this paper we introduce a new dynamic importance sampling propagation algorithm for Bayesian networks. Importance sampling is based on using an auxiliary sampling distribution from which a set of configurations of the variables in the network is drawn, and the performance of the algorithm depends on the variance of the weights associated with the simulated configurations. The basic idea of dynamic importance sampling is to use the simulation of a configuration to modify the sampling distribution in order to improve its quality and thereby reduce the variance of future weights. The paper shows that this can be achieved with low computational effort. The experiments carried out show that the final results can be very good even when the initial sampling distribution is far from the optimum.
    Funding: Spanish Ministry of Science and Technology, project Elvira II (TIC2001-2973-C05-01 and 02).
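
    Below is a minimal Python sketch of the plain importance-sampling estimator that this kind of algorithm builds on, for a toy two-node network A -> B with evidence B = 1. The dynamic part of the paper (revising the sampling distribution after each simulated configuration) is not reproduced; all names and numbers are illustrative.

    # Toy importance sampling for P(A | B = 1) in a two-node network A -> B.
    # Configurations of A are drawn from the auxiliary distribution q_a and
    # weighted by p(a, evidence) / q(a); the variance of these weights is what
    # the dynamic scheme in the paper tries to reduce.
    import random

    p_a = {0: 0.7, 1: 0.3}                       # P(A)
    p_b_given_a = {0: {0: 0.9, 1: 0.1},          # P(B | A)
                   1: {0: 0.2, 1: 0.8}}
    evidence_b = 1
    q_a = {0: 0.5, 1: 0.5}                       # auxiliary sampling distribution

    def estimate_posterior_a(n_samples=10000):
        weight_sum = {0: 0.0, 1: 0.0}
        for _ in range(n_samples):
            a = 0 if random.random() < q_a[0] else 1           # simulate a configuration
            w = p_a[a] * p_b_given_a[a][evidence_b] / q_a[a]   # importance weight
            weight_sum[a] += w
        total = weight_sum[0] + weight_sum[1]
        return {a: weight_sum[a] / total for a in weight_sum}

    print(estimate_posterior_a())   # close to the exact posterior {0: ~0.226, 1: ~0.774}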

    Maximum of entropy for belief intervals under Evidence Theory

    The Dempster-Shafer Theory (DST), or Evidence Theory, has been commonly used to deal with uncertainty. It is based on the concept of a basic probability assignment (BPA). The upper entropy on the credal set associated with a BPA is the only uncertainty measure in DST that verifies all the necessary mathematical properties and behaviors. Nonetheless, its computation is notably complex. For this reason, many alternatives to this measure have been proposed recently, but they do not satisfy most of the mathematical requirements and present some undesirable behaviors. Belief intervals have frequently been employed to quantify uncertainty in DST in recent years, and they can represent uncertainty-based information better than a BPA. In this research, we develop a new uncertainty measure that consists of the maximum of entropy on the credal set corresponding to the belief intervals for singletons. It verifies all the crucial mathematical requirements and presents good behavior, solving most of the shortcomings found in recently proposed uncertainty measures. Moreover, its calculation is notably easier than that of the upper entropy on the credal set associated with the BPA. Therefore, our proposed uncertainty measure is more suitable for practical applications.
    Funding: Spanish Ministerio de Economia y Competitividad TIN2016-77902-C3-2-P; European Union (EU) TEC2015-69496-
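
    As a rough illustration of the objects involved, the following Python sketch computes the belief/plausibility intervals for singletons from a small, made-up BPA and then finds the maximum-entropy distribution compatible with those intervals. The clipping form used for that maximum comes from the standard solution of entropy maximization under box constraints; it is not claimed to be the authors' exact procedure.

    # Belief and plausibility intervals for singletons from a BPA, and the
    # most uniform (maximum-entropy) distribution lying inside those intervals.
    frame = ('a', 'b', 'c')
    bpa = {('a',): 0.3, ('a', 'b'): 0.4, ('a', 'b', 'c'): 0.3}   # illustrative BPA

    def belief(x):
        return sum(m for focal, m in bpa.items() if set(focal) <= {x})

    def plausibility(x):
        return sum(m for focal, m in bpa.items() if x in focal)

    intervals = {x: (belief(x), plausibility(x)) for x in frame}

    def max_entropy_distribution(intervals, tol=1e-12):
        """p_x = clip(lam, l_x, u_x), with lam found by bisection so that sum(p) = 1."""
        lo, hi = 0.0, 1.0
        while hi - lo > tol:
            lam = (lo + hi) / 2
            s = sum(min(u, max(l, lam)) for l, u in intervals.values())
            lo, hi = (lam, hi) if s < 1 else (lo, lam)
        lam = (lo + hi) / 2
        return {x: min(u, max(l, lam)) for x, (l, u) in intervals.items()}

    print(intervals)                            # {'a': (0.3, 1.0), 'b': (0.0, 0.7), 'c': (0.0, 0.3)}
    print(max_entropy_distribution(intervals))  # about {'a': 0.35, 'b': 0.35, 'c': 0.3}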

    Required mathematical properties and behaviors of uncertainty measures on belief intervals

    The Dempster-Shafer theory of evidence (DST) has been widely used to handle uncertainty-based information. It is based on the concept of basic probability assignment (BPA). Belief intervals are easier to manage than a BPA for representing uncertainty-based information. For this reason, several recently proposed uncertainty measures for DST are based on belief intervals. In this study, we analyze the crucial mathematical properties and behavioral requirements that must be verified by every uncertainty measure on belief intervals, building on the study previously carried out for uncertainty measures on BPAs. Furthermore, we analyze which of these properties are satisfied by each of the uncertainty measures on belief intervals proposed so far. This comparative analysis shows that, among these measures, the maximum of entropy on the belief intervals is the most suitable one to employ in practical applications, since it is the only one that satisfies all the required mathematical properties and behaviors.

    Upgrading the Fusion of Imprecise Classifiers

    Imprecise classification is a relatively new task within Machine Learning. The difference from standard classification is that not only is a single state of the variable under study determined; a set of states that do not have enough information against them, and hence cannot be ruled out, is determined as well. For imprecise classification, a model called the Imprecise Credal Decision Tree (ICDT), which uses imprecise probabilities and maximum of entropy as the information measure, has been presented. A difficult and interesting task is how to combine this type of imprecise classifier. A procedure based on the minimum level of dominance has been presented; although it represents a very strong method of combination, it carries a considerable risk of erroneous prediction. In this research, we use the second-best theory to argue that this type of combination can be improved through a new procedure built by relaxing the constraints. The new procedure is compared with the original one in an experimental study on a large set of datasets, and shows improvement.
    Funding: UGR-FEDER funds under Project A-TIC-344-UGR20; FEDER/Junta de Andalucía-Consejería de Transformación Económica, Industria, Conocimiento y Universidades under Project P20_0015
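
    The toy Python sketch below only illustrates the general point: requiring a state to survive every single classifier is a very demanding rule, and relaxing it to a threshold keeps more plausible states. The voting rule shown is a stand-in for the idea of relaxing constraints, not the paper's actual procedure based on levels of dominance.

    # Combining set-valued (imprecise) predictions from several classifiers.
    # Each classifier returns the set of states it cannot rule out.
    predictions = [{'A', 'B'}, {'B', 'C'}, {'A', 'B'}, {'A'}]   # four classifiers
    states = {'A', 'B', 'C'}

    def combine(predictions, tau=1.0):
        """Keep the states accepted by at least a fraction tau of the classifiers."""
        n = len(predictions)
        return {s for s in states if sum(s in p for p in predictions) / n >= tau}

    print(combine(predictions, tau=1.0))   # strict rule: set(), no state survives all classifiers
    print(combine(predictions, tau=0.5))   # relaxed rule: {'A', 'B'}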

    Penniless propagation in join trees


    A Monte-Carlo Algorithm for Probabilistic Propagation in Belief Networks based on Importance Sampling and Stratified Simulation Techniques

    A class of Monte Carlo algorithms for probability propagation in belief networks is given. The simulation is based on a two-step procedure. The first step is a node-deletion technique to calculate the 'a posteriori' distribution on a variable, with the particularity that, when exact computations are too costly, they are carried out in an approximate way. In the second step, the computations done in the first one are used to obtain random configurations for the variables of interest. These configurations are weighted according to the importance sampling methodology. Different particular algorithms are obtained depending on the approximation procedure used in the first step and on the way the random configurations are obtained. For the latter, a stratified sampling technique is used, which has been adapted so that it can be applied to very large networks without round-off problems.
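
    A minimal Python sketch of the stratified ingredient follows: sampling one discrete variable by drawing exactly one point in each of n equal strata of the unit interval, so that the empirical proportions track the sampling distribution almost exactly. This is only the basic technique; the adaptation to very large networks described in the paper is not reproduced here, and the example distribution is made up.

    # Stratified sampling of a discrete variable: one uniform point per stratum.
    import bisect
    import random

    def stratified_sample(probs, n):
        values = list(probs)
        cumulative = []
        acc = 0.0
        for v in values:
            acc += probs[v]
            cumulative.append(acc)
        cumulative[-1] = 1.0                       # guard against round-off in the last bin
        samples = []
        for i in range(n):
            u = (i + random.random()) / n          # one point inside stratum [i/n, (i+1)/n)
            samples.append(values[bisect.bisect_left(cumulative, u)])
        return samples

    probs = {'a': 0.2, 'b': 0.5, 'c': 0.3}
    draws = stratified_sample(probs, 1000)
    print({v: draws.count(v) / 1000 for v in probs})   # proportions almost exactly equal to probs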

    New strategies for finding multiplicative decompositions of probability trees

    Get PDF
    Probability trees are a powerful data structure for representing probabilistic potentials. However, their complexity can become intractable if they represent a probability distribution over a large set of variables. In this paper, we study the problem of decomposing a probability tree into a product of smaller trees, with the aim of being able to handle bigger probabilistic potentials. We propose exact and approximate approaches and evaluate their behaviour through an extensive set of experiments.
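
    To fix ideas, the Python sketch below shows the kind of saving a multiplicative decomposition gives, using flat tables instead of trees: a potential over five binary variables that factorizes exactly as phi(x, y1, y2, z1, z2) = f1(x, y1, y2) * f2(x, z1, z2). The example is handcrafted to factor exactly; the paper works with probability trees and also studies approximate decompositions.

    # A potential that decomposes as a product of two smaller potentials sharing X.
    import random
    from itertools import product

    random.seed(0)
    f1 = {k: random.random() for k in product((0, 1), repeat=3)}   # f1(x, y1, y2)
    f2 = {k: random.random() for k in product((0, 1), repeat=3)}   # f2(x, z1, z2)

    # The joint potential that the two factors reconstruct.
    phi = {k: f1[k[:3]] * f2[(k[0],) + k[3:]] for k in product((0, 1), repeat=5)}

    def decomposes(phi, f1, f2, tol=1e-12):
        """Check phi(x, y1, y2, z1, z2) == f1(x, y1, y2) * f2(x, z1, z2) everywhere."""
        return all(abs(v - f1[k[:3]] * f2[(k[0],) + k[3:]]) <= tol for k, v in phi.items())

    print(len(phi), "values in the joint vs", len(f1) + len(f2), "in the factors")   # 32 vs 16
    print(decomposes(phi, f1, f2))                                                   # True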

    Computation of Kullback–Leibler Divergence in Bayesian Networks

    Kullback–Leibler divergence KL(p, q) is the standard measure of error when a true probability distribution p is approximated by a probability distribution q. Its efficient computation is essential in many tasks, for instance in approximate computation or as a measure of error when learning a probability distribution. For high-dimensional distributions, such as those associated with Bayesian networks, a direct computation can be infeasible. This paper considers the problem of efficiently computing the Kullback–Leibler divergence of two probability distributions, each one coming from a different Bayesian network, possibly with different structures. The approach is based on an auxiliary deletion algorithm to compute the necessary marginal distributions, using a cache of operations with potentials so that past computations are reused whenever possible. The algorithms are tested with Bayesian networks from the bnlearn repository. Computer code in Python is provided, based on pgmpy, a library for working with probabilistic graphical models.
    Funding: Spanish Ministry of Education and Science under project PID2019-106758GB-C31; European Regional Development Fund (FEDER)
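
    Since the abstract says that Python code based on pgmpy accompanies the paper, the sketch below is deliberately more modest: a brute-force computation of KL(p, q) between two tiny networks over the same binary variables, directly from the definition summed over the full joint. The paper's contribution is precisely avoiding this enumeration via a deletion algorithm with a cache of potentials; all numbers here are illustrative.

    # Brute-force KL(p || q) for two small networks over binary variables A, B.
    from itertools import product
    from math import log

    # Network p: A -> B
    p_a = {0: 0.6, 1: 0.4}                             # P(A)
    p_b = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.3, 1: 0.7}}   # P(B | A)

    # Network q: different structure, A and B independent
    q_a = {0: 0.5, 1: 0.5}
    q_b = {0: 0.7, 1: 0.3}

    def joint_p(a, b):
        return p_a[a] * p_b[a][b]

    def joint_q(a, b):
        return q_a[a] * q_b[b]

    kl = sum(joint_p(a, b) * log(joint_p(a, b) / joint_q(a, b))
             for a, b in product((0, 1), repeat=2))
    print(kl)   # KL(p || q) in nats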